29 JAN 2021 by ideonexus

 Web Browsers Shouldn't Have Features

Kay: Go to a blog, go to any Wiki, and find one that's WYSIWYG like Microsoft Word is. Word was done in 1984. HyperCard was 1989. Find me Web pages that are even as good as HyperCard. The Web was done after that, but it was done by people who had no imagination. They were just trying to satisfy an immediate need. There's nothing wrong with that, except that when you have something like the Industrial Revolution squared, you wind up setting de facto standards — in this case, really bad de fa...
Folksonomies: computing
  1  notes

Features should come from the objects that the browser loads from web sites, not from the browser itself.

08 JUL 2016 by ideonexus

 The Deletionist

The Deletionist is a concise system for automatically producing an erasure poem from any Web page. It systematically removes text to uncover poems, discovering a network of poems called “the Worl” within the World Wide Web. [...] The Deletionist takes the form of a JavaScript bookmarklet that automatically creates erasures from any Web pages the reader visits. A similar method has been used in Ji Lee's Wordless Web, which removes all text from Web pages, as well as applets that turn web...
Folksonomies: new media
  1  notes
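A minimal sketch of how an erasure bookmarklet of this kind can work in JavaScript, assuming a simple random keep-or-blank rule rather than The Deletionist's actual selection algorithm:

    // Bookmarklet sketch: walk the page's text nodes and blank out most words,
    // leaving a sparse "erasure" behind. The 5% keep-probability and the random
    // selection rule are illustrative assumptions, not The Deletionist's method.
    javascript:(function () {
      var walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
      var node;
      while ((node = walker.nextNode())) {
        node.nodeValue = node.nodeValue.replace(/\S+/g, function (word) {
          return Math.random() < 0.05 ? word : " ".repeat(word.length);
        });
      }
    })();

Saved as a bookmark whose address is the code above, it can be clicked on any page to erase that page in place.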
 
09 NOV 2015 by ideonexus

 Storing Information In a URL

urlHosted is an experimental web app that misuses the part after the "#" of a URL to store and read data. The app is unhosted. See this definition from unhosted.org: Also known as "serverless", "client-side", or "static" web apps, unhosted web apps do not send your user data to their server. Either you connect your own server at runtime, or your data stays within the browser. This means this app neither stores nor sends any of your data to any server. Inst...
Folksonomies: hacking encryption
  1  notes

Store the content of a page inside the URL and have this site render it.
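A minimal sketch of the underlying trick, assuming plain JavaScript and no encryption (urlHosted's real encoding and any crypto layer are not reproduced here). Everything after the "#" stays in the browser, because the fragment is never sent in HTTP requests:

    // Store a page's content in the URL fragment and read it back.
    // The fragment (everything after "#") is never sent to the server,
    // so the data lives entirely in the shareable link itself.
    function saveToUrl(text) {
      location.hash = encodeURIComponent(text);
    }

    function loadFromUrl() {
      return decodeURIComponent(location.hash.slice(1));
    }

    // Example: saveToUrl("Hello, unhosted world!") yields a link that any
    // copy of the page can render again via loadFromUrl().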

17 MAR 2014 by ideonexus

 Spider Trap

A spider trap (or crawler trap) is a set of web pages that may intentionally or unintentionally be used to cause a web crawler or search bot to make an infinite number of requests or cause a poorly constructed crawler to crash. Web crawlers are also called web spiders, from which the name is derived. Spider traps may be created to "catch" spambots or other crawlers that waste a website's bandwidth. They may also be created unintentionally by calendars that use dynamic pages with l...
Folksonomies: computer science hacking
  1  notes

A website of infinitely recursive pages that lures web crawlers into an endless indexing loop.
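A toy sketch of the dynamic-page variety, using Node's built-in http module (the /trap/ path is a hypothetical name): every page links to a "deeper" one, so a crawler with no depth limit keeps requesting pages forever, much like a calendar that always offers a "next month" link.

    // Toy spider trap: every requested page links to a deeper page,
    // so a crawler without a depth limit never runs out of URLs.
    const http = require("http");

    http.createServer(function (req, res) {
      // Read the current depth from the URL, e.g. /trap/7 -> 7.
      const match = req.url.match(/^\/trap\/(\d+)/);
      const depth = match ? parseInt(match[1], 10) : 0;
      res.writeHead(200, { "Content-Type": "text/html" });
      res.end('<html><body><p>Page ' + depth + '</p>' +
              '<a href="/trap/' + (depth + 1) + '">next</a></body></html>');
    }).listen(8080);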

02 JAN 2011 by ideonexus

 The Importance of Web Topology

Web topology contains more complexity than simple linear chains. In this section, we will discuss attempts to measure the global structure of the Web, and how individual webpages fit into that context. Are there interesting representations that define or suggest important properties? For example, might it be possible to map knowledge on the Web? Such a map might allow the possibility of understanding online communities, or of engaging in 'plume tracing' - following a meme, or idea, or rumour, or...
  1  notes

Mapping the Web allows us to find patterns in its structure, with applications such as understanding online communities or tracing how an idea or rumour spreads.
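As a small illustration of what measuring that structure can mean, here is a sketch in JavaScript that treats a handful of hypothetical pages as a directed graph and computes each page's in-degree, one crude indicator of where it sits in the larger topology:

    // Represent pages and their outgoing links as a directed graph,
    // then count incoming links (in-degree) for each page.
    // The page names are hypothetical.
    const links = {
      "a.example": ["b.example", "c.example"],
      "b.example": ["c.example"],
      "c.example": ["a.example"],
      "d.example": ["c.example"],
    };

    const inDegree = {};
    for (const [page, targets] of Object.entries(links)) {
      inDegree[page] = inDegree[page] || 0;
      for (const target of targets) {
        inDegree[target] = (inDegree[target] || 0) + 1;
      }
    }

    console.log(inDegree); // c.example has the most incoming links (3)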